
Collaborating Authors

Saman Sarraf


Evaluating Generative AI-Enhanced Content: A Conceptual Framework Using Qualitative, Quantitative, and Mixed-Methods Approaches

Sarraf, Saman

arXiv.org Artificial Intelligence

Generative AI (GenAI) has revolutionized content generation, offering transformative capabilities for improving language coherence, readability, and overall quality. This manuscript explores the application of qualitative, quantitative, and mixed-methods research approaches to evaluate the performance of GenAI models in enhancing scientific writing. Using a hypothetical use case involving a collaborative medical imaging manuscript, we demonstrate how each method provides unique insights into the impact of GenAI. Qualitative methods gather in-depth feedback from expert reviewers, analyzing their responses with thematic analysis tools to capture nuanced improvements and identify limitations. Quantitative approaches employ automated metrics such as BLEU, ROUGE, and readability scores, as well as user surveys, to objectively measure improvements in coherence, fluency, and structure. Mixed-methods research integrates these strengths, combining statistical evaluations with detailed qualitative insights to provide a comprehensive assessment. Together, these methods make it possible to quantify the degree of improvement in GenAI-generated content, addressing critical aspects of linguistic quality and technical accuracy. They also offer a robust framework for benchmarking GenAI tools against traditional editing processes, ensuring the reliability and effectiveness of these technologies. By leveraging these methodologies, researchers can evaluate the performance boost driven by GenAI, refine its applications, and guide its responsible adoption in high-stakes domains such as healthcare and scientific research. This work underscores the importance of rigorous evaluation frameworks for advancing trust and innovation in GenAI.
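The abstract names BLEU, ROUGE, and readability scores as quantitative metrics. As an illustration only (not code from the paper), here is a minimal pure-Python sketch of two such measures: a unigram ROUGE-1 F1 overlap and the Flesch Reading Ease formula. The function names and the crude syllable heuristic are assumptions for this sketch; production work would use established packages instead.

```python
import re

def rouge1_f1(candidate: str, reference: str) -> float:
    """Unigram-overlap ROUGE-1 F1 between a candidate and a reference text."""
    cand = candidate.lower().split()
    ref = reference.lower().split()
    ref_counts = {}
    for tok in ref:
        ref_counts[tok] = ref_counts.get(tok, 0) + 1
    overlap = 0
    for tok in cand:
        if ref_counts.get(tok, 0) > 0:  # clip matches to reference counts
            ref_counts[tok] -= 1
            overlap += 1
    if not cand or not ref:
        return 0.0
    precision = overlap / len(cand)
    recall = overlap / len(ref)
    if precision + recall == 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)

def count_syllables(word: str) -> int:
    """Rough syllable count via vowel groups (a common heuristic, not exact)."""
    return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

def flesch_reading_ease(text: str) -> float:
    """Flesch Reading Ease: 206.835 - 1.015*(words/sentence) - 84.6*(syllables/word)."""
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)
    syllables = sum(count_syllables(w) for w in words)
    n_sent = max(1, len(sentences))
    n_words = max(1, len(words))
    return 206.835 - 1.015 * (n_words / n_sent) - 84.6 * (syllables / n_words)
```

Scores like these are what allow a before/after comparison of GenAI-edited text against a human-edited reference, as the abstract describes.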


ChatGPT Application In Summarizing An Evolution Of Deep Learning Techniques In Imaging: A Qualitative Study

Sarraf, Arman, Abbaspour, Amirabbas

arXiv.org Artificial Intelligence

Text summarization is a pivotal application of natural language processing (NLP) that condenses lengthy documents or articles into shorter, coherent representations while retaining the essential information. Through various algorithms and techniques, NLP models identify significant sentences, key phrases, or essential concepts within the text to generate concise summaries. Extractive summarization selects and stitches together important segments directly from the original text, often based on relevance, importance, or frequency of occurrence. Abstractive summarization, by contrast, goes beyond extraction, generating novel sentences that convey the core meaning while potentially rephrasing and restructuring the content. NLP-powered summarization systems play a crucial role in information retrieval, aiding quick comprehension and accessibility of vast amounts of text across diverse domains such as news articles, research papers, and legal documents. ChatGPT offers strong text summarization capabilities, harnessing its advanced NLP architecture to distill lengthy conversations, articles, or documents into concise, coherent summaries. Leveraging its broad understanding of language semantics, context, and syntax, ChatGPT identifies key points, essential information, and significant passages within the text. Its summarization capabilities encompass both extractive and abstractive techniques, allowing it to select important segments directly from the input while also generating novel, coherent sentences that capture the essence of the content.
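To make the extractive/abstractive distinction concrete, here is a minimal frequency-based extractive summarizer, the classic heuristic the abstract alludes to (score sentences by the average frequency of their content words, then keep the top-scoring sentences in original order). This is an illustrative sketch, not how ChatGPT summarizes, and the tiny stopword list is an assumption.

```python
import re
from collections import Counter

def extractive_summary(text: str, n_sentences: int = 2) -> str:
    """Keep the n highest-scoring sentences, in their original order."""
    sentences = [s.strip() for s in re.split(r"(?<=[.!?])\s+", text) if s.strip()]
    words = re.findall(r"[a-z']+", text.lower())
    stop = {"the", "a", "an", "of", "to", "in", "and", "or", "is", "are", "that"}
    freq = Counter(w for w in words if w not in stop)

    def score(sent: str) -> float:
        # Average content-word frequency, normalized by sentence length.
        toks = re.findall(r"[a-z']+", sent.lower())
        return sum(freq[t] for t in toks) / max(1, len(toks))

    ranked = sorted(range(len(sentences)), key=lambda i: score(sentences[i]),
                    reverse=True)
    keep = sorted(ranked[:n_sentences])  # restore document order
    return " ".join(sentences[i] for i in keep)
```

An abstractive system would instead generate new sentences rather than reuse these verbatim, which is what distinguishes ChatGPT-style summarization from this heuristic.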


Formulating A Strategic Plan Based On Statistical Analyses And Applications For Financial Companies Through A Real-World Use Case

Sarraf, Saman

arXiv.org Artificial Intelligence

Formulating a strategic plan aligned with a company's business scope allows the company to explore data-driven ways of improving its business and mitigating risk quantitatively while using collected data for statistical applications. The company's business leadership generally organizes joint meetings with internal or external data analysis teams to design a plan for executing business-related statistical analyses. Such projects demonstrate in which areas the company should invest and how to adjust the budget for business verticals with low revenue. Furthermore, statistical applications can determine how to improve staff performance in the workplace. LendingClub, a peer-to-peer lending company, offers loans and investment products in different sectors, including personal and business loans, automobile loans, and health-related financing loans. LendingClub's business model comprises three primary players: borrowers, investors, and portfolios of issued loans. LendingClub aims to expand its statistical analytics, comprising infrastructure and algorithmic software applications, to ultimately develop two solutions: a) estimating the duration in which clients will pay off their loans, and b) 30-minute loan approval decision-making. To implement these two capabilities, the company has collected data on loans granted or rejected over 12 years, covering 145 attributes and more than 2 million observations, of which 32 features have no missing values across the dataset.
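The dataset description (145 attributes, of which 32 are complete) implies a preprocessing step of identifying the features with no missing values before modeling. A minimal sketch of that step, assuming rows arrive as dictionaries and that missing values are encoded as None or an empty string; the field names in the usage example are hypothetical, not actual LendingClub column names.

```python
def complete_features(rows: list[dict]) -> list[str]:
    """Return the feature names that have a non-missing value in every row.

    Missing is modeled here as None or an empty string (an assumption;
    real loan exports may use other sentinels such as "NA").
    """
    if not rows:
        return []
    names = set(rows[0])
    for row in rows:
        # Intersect with the keys that are present and non-missing in this row.
        names &= {k for k, v in row.items() if v not in (None, "")}
    return sorted(names)

# Hypothetical two-row sample: "emp_title" is missing in the first row.
rows = [
    {"loan_amnt": 1000, "emp_title": "", "term": 36},
    {"loan_amnt": 500, "emp_title": "nurse", "term": 60},
]
```

On the full dataset described above, a pass like this would yield the 32 fully populated features suitable for models that cannot tolerate missing values.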